Although substantial efforts have been made to apply graph neural networks (GNNs) to AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially when labeled molecules are scarce. Recent studies suggest that large GNN models pre-trained with self-supervised learning on unlabeled datasets transfer better to downstream molecular property prediction tasks. However, such models typically require large-scale datasets and considerable computational resources, making pre-training time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, the Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders that reconstruct the missing nodes and edges of a masked molecular graph. Surprisingly, we find that a high mask ratio (60%) of atoms and bonds yields the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, in which a transformer-based encoder takes only the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetric design, BatmanNet learns efficiently even from a much smaller unlabeled molecular dataset, capturing the underlying structural and semantic information and overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabeled molecules as pre-training data, BatmanNet, with 2.575M parameters, improves the average AUC by 0.5% over the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
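To make the masking-and-reconstruction idea concrete, here is a minimal sketch (node branch only, PyTorch-style Python) of a 60%-masked autoencoder with an asymmetric encoder-decoder: the encoder sees only the visible nodes, and a lightweight decoder fills in mask tokens. It treats atoms as a plain token set and omits bond structure and positional encodings, so all module names and dimensions are illustrative assumptions, not BatmanNet's implementation.

```python
# Minimal sketch of the bi-branch masked autoencoding idea (node branch only).
import torch
import torch.nn as nn

class MaskedNodeAutoencoder(nn.Module):
    def __init__(self, feat_dim=64, enc_dim=128, enc_layers=4, dec_layers=1, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(feat_dim, enc_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(enc_dim, nhead=4, batch_first=True), enc_layers)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, enc_dim))
        self.decoder = nn.TransformerEncoder(            # lightweight: few layers
            nn.TransformerEncoderLayer(enc_dim, nhead=4, batch_first=True), dec_layers)
        self.head = nn.Linear(enc_dim, feat_dim)         # reconstruct node features

    def forward(self, x):                                # x: (batch, nodes, feat_dim)
        b, n, _ = x.shape
        n_keep = max(1, int(n * (1 - self.mask_ratio)))
        perm = torch.rand(b, n).argsort(dim=1)           # random mask per graph
        keep, masked = perm[:, :n_keep], perm[:, n_keep:]
        visible = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        z = self.encoder(self.embed(visible))            # encoder sees visible nodes only
        # append mask tokens, decode, and reconstruct the hidden nodes
        tokens = torch.cat([z, self.mask_token.expand(b, n - n_keep, -1)], dim=1)
        rec = self.head(self.decoder(tokens))[:, n_keep:]
        target = torch.gather(x, 1, masked.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        return nn.functional.mse_loss(rec, target)

loss = MaskedNodeAutoencoder()(torch.randn(8, 30, 64))   # 8 molecules, 30 atoms each
loss.backward()
```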
Backdoor attacks represent one of the major threats to machine learning models. Various efforts have been made to mitigate them, but existing defenses have become increasingly complex, often demanding substantial computational resources or jeopardizing model utility. In this work, we show that fine-tuning, one of the most common and easy-to-adopt training operations, can effectively remove backdoors from machine learning models while maintaining high model utility. Extensive experiments over three machine learning paradigms show that fine-tuning and our newly proposed super-fine-tuning achieve strong defense performance. Furthermore, we coin a new term, backdoor sequela, to measure the changes in a model's vulnerability to other attacks before and after the backdoor has been removed. Empirical evaluation shows that, compared to other defense methods, super-fine-tuning leaves limited backdoor sequela. We hope our results help machine learning model owners better protect their models from backdoor threats; they also call for the design of more advanced attacks to comprehensively assess machine learning models' backdoor vulnerabilities.
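As a rough illustration of the defense, below is a minimal sketch of plain fine-tuning on clean data; the proposed super-fine-tuning additionally manipulates the learning-rate schedule, which this sketch does not reproduce. `backdoored_model` and `clean_loader` are assumed inputs.

```python
# Minimal sketch: attempting to remove a backdoor by fine-tuning the suspect
# model on clean, trigger-free data.
import torch
import torch.nn as nn

def finetune_defense(backdoored_model, clean_loader, epochs=10, lr=1e-3, device="cpu"):
    model = backdoored_model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in clean_loader:                 # clean samples only, no triggers
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        sched.step()                              # decaying lr preserves clean accuracy
    return model.eval()
```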
Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the state-of-the-art on many knowledge-intensive NLP tasks. However, FiD suffers from very expensive inference. We show that the majority of inference time results from memory bandwidth constraints in the decoder, and propose two simple changes to the FiD architecture to speed up inference by 7x. The faster decoder inference then allows for a much larger decoder. We denote FiD with the above modifications as FiDO, and show that it strongly improves performance over existing FiD models for a wide range of inference budgets. For example, FiDO-Large-XXL performs faster inference than FiD-Base and achieves better performance than FiD-Large.
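A back-of-envelope estimate of the memory-bandwidth argument, with all hardware and model numbers assumed purely for illustration: at small batch sizes, each generated token must stream the decoder weights and the cross-attention key/value cache over every retrieved-passage token, so memory traffic rather than FLOPs dominates the per-token time.

```python
# Rough estimate (illustrative numbers only) of per-token decode time for a
# FiD-style decoder attending over many retrieved passages.
decoder_params   = 400e6          # assumed decoder size (parameters)
d_model          = 1024           # assumed hidden size
n_layers         = 24             # assumed decoder layers
passages, length = 100, 250       # retrieved passages x tokens per passage
bytes_per_value  = 2              # bf16
bandwidth        = 1e12           # assumed 1 TB/s accelerator memory bandwidth
flops            = 100e12         # assumed 100 TFLOP/s

kv_cache_bytes = 2 * n_layers * passages * length * d_model * bytes_per_value  # K + V
weight_bytes   = decoder_params * bytes_per_value
t_memory  = (kv_cache_bytes + weight_bytes) / bandwidth        # time to read from memory
t_compute = 2 * decoder_params / flops                          # ~2 FLOPs/param, ignores attention FLOPs
print(f"memory-bound time/token : {t_memory * 1e6:.0f} us")
print(f"compute-bound time/token: {t_compute * 1e6:.0f} us")
# With these numbers, memory traffic dominates by orders of magnitude, so reducing
# what the decoder must read per step matters far more than reducing raw FLOPs.
```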
Conventional closed-world information extraction (IE) approaches rely on human ontologies to define the scope for extraction. As a result, such approaches fall short when applied to new domains. This calls for systems that can automatically infer new types from given corpora, a task which we refer to as type discovery. To tackle this problem, we introduce the idea of type abstraction, where the model is prompted to generalize and name the type. Then we use the similarity between inferred names to induce clusters. Observing that this abstraction-based representation is often complementary to the entity/trigger token representation, we set up these two representations as two views and design our model as a co-training framework. Our experiments on multiple relation extraction and event extraction datasets consistently show the advantage of our type abstraction approach. Code available at https://github.com/raspberryice/type-discovery-abs.
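A toy sketch of the name-similarity clustering step, assuming the prompted model has already produced an abstract type name for each instance (the names below are invented for illustration); the entity/trigger view and the co-training loop are omitted.

```python
# Cluster instances whose generated type names look alike (toy example).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

predicted_names = ["employer", "place of work", "company of person",
                   "birthplace", "city of birth", "place born"]
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(predicted_names)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X.toarray())
print(dict(zip(predicted_names, labels)))   # induced type clusters
```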
Purely data-driven deep neural networks (DNNs) applied to physical engineering systems can infer relations that violate physical laws, leading to unexpected consequences. To address this challenge, we propose a physics-model-based DNN framework, Phy-Taylor, that accelerates learning of compliant representations with physical knowledge. The Phy-Taylor framework makes two key contributions: it introduces a new architecture, the physics-compatible neural network (PhN), and a novel compliance mechanism that we call Physics-guided Neural Network Editing. The PhN aims to directly capture nonlinearities inspired by physical quantities such as kinetic energy, potential energy, electrical power, and aerodynamic drag. To this end, the PhN augments neural network layers with two key components: (i) monomials of a Taylor-series expansion of nonlinear functions that capture physical knowledge, and (ii) a suppressor that mitigates the influence of noise. The neural network editing mechanism further modifies network links and activation functions to be consistent with physical knowledge. As an extension, we also propose a self-correcting Phy-Taylor framework that introduces two additional capabilities: (i) physics-model-based safety relationship learning, and (ii) automatic output correction when safety violations occur. Through experiments, we show that, by directly expressing hard-to-learn nonlinearities and by constraining dependencies, Phy-Taylor uses considerably fewer parameters and trains markedly faster, while offering enhanced model robustness and accuracy.
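The following is an illustrative Taylor-expansion layer in the spirit of the PhN: the input is augmented with low-order monomials (e.g., the squared-velocity terms that appear in kinetic energy or drag), a small threshold acts as a noise suppressor, and a linear map combines the resulting physical features. The class name, the suppressor form, and all dimensions are assumptions, not the paper's code.

```python
# Illustrative Taylor-expansion layer with a simple noise suppressor.
import itertools
import torch
import torch.nn as nn

class TaylorLayer(nn.Module):
    def __init__(self, in_dim, out_dim, degree=2, noise_threshold=1e-3):
        super().__init__()
        self.eps = noise_threshold
        # all monomials x_i, x_i*x_j, ... up to the given degree
        self.combos = [c for d in range(1, degree + 1)
                       for c in itertools.combinations_with_replacement(range(in_dim), d)]
        self.linear = nn.Linear(len(self.combos), out_dim)   # constant term = bias

    def forward(self, x):
        x = torch.where(x.abs() < self.eps, torch.zeros_like(x), x)   # suppressor
        feats = [x[:, list(c)].prod(dim=1, keepdim=True) for c in self.combos]
        return self.linear(torch.cat(feats, dim=1))

layer = TaylorLayer(in_dim=3, out_dim=1)      # e.g. [mass, velocity, drag coeff] -> force
print(layer(torch.randn(4, 3)).shape)         # torch.Size([4, 1])
```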
Extracting informative arguments of events from news articles is a challenging problem in information extraction that requires a global understanding of each document's context. Although recent work on document-level extraction has moved beyond single sentences and improved the cross-sentence reasoning capability of end-to-end models, such models are still restricted by input sequence length constraints and usually ignore the global context between events. To address this issue, we introduce a new global, neural generation-based framework for document-level event argument extraction: we construct a document memory store to record contextual event information and leverage it to implicitly and explicitly help decode the arguments of later events. Empirical results show that our framework substantially outperforms prior methods and, with its constrained decoding design, is more robust to adversarially annotated examples. (Our code and resources are available at https://github.com/xinyadu/memory_docie for research purposes.)
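Below is an illustrative sketch of what a document memory store might look like: arguments decoded for earlier events are recorded and retrieved to condition the extraction of later events (for example, prepended to the generation prompt). The data structure and its interface are assumptions, not the authors' implementation.

```python
# Toy document-level memory: record earlier event arguments, retrieve them as
# textual hints when decoding later events in the same document.
from collections import defaultdict

class DocumentMemory:
    def __init__(self):
        self.events = []                          # decoded (event_type, {role: span}) pairs
        self.by_entity = defaultdict(list)        # entity span -> roles it has filled

    def add(self, event_type, arguments):
        self.events.append((event_type, arguments))
        for role, span in arguments.items():
            self.by_entity[span].append((event_type, role))

    def context_for(self, candidate_spans):
        """Return prior roles of candidate entities, to prepend to the decoder input."""
        hints = []
        for span in candidate_spans:
            for event_type, role in self.by_entity.get(span, []):
                hints.append(f"{span} was {role} of {event_type}")
        return " ; ".join(hints)

memory = DocumentMemory()
memory.add("Transport", {"Agent": "the rebels", "Destination": "Aleppo"})
print(memory.context_for(["the rebels"]))   # "the rebels was Agent of Transport"
```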
Federated learning (FL) provides an efficient paradigm for jointly training a global model over distributed users' data. Because the local training data come from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is often trained in a differentially private manner (DPFL). In this paper we therefore ask: can we leverage the innate privacy property of DPFL to provide certified robustness against poisoning attacks, and can we further improve the privacy of FL to strengthen such certification? We first study user-level and instance-level privacy in FL and propose new mechanisms that achieve improved instance-level privacy. We then provide two robustness certification criteria for DPFL at both levels: certified prediction and certified attack cost. Theoretically, we prove the certified robustness of DPFL under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theory under a range of attacks on different datasets. We show that DPFL with a tighter privacy guarantee always provides a stronger robustness certification in terms of certified attack cost, but the best certified prediction is achieved under a proper balance between privacy protection and utility loss.
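For intuition, here is a minimal sketch of user-level DP-FedAvg-style aggregation: each user update is clipped and Gaussian noise is added to the average; the same noise that yields the differential-privacy guarantee is what the certification arguments build on. All hyperparameters are illustrative.

```python
# Minimal sketch of clipped, noised federated aggregation (user-level DP).
import numpy as np

def dp_aggregate(user_updates, clip_norm=1.0, noise_multiplier=1.0,
                 rng=np.random.default_rng(0)):
    clipped = []
    for u in user_updates:                               # one flat update vector per user
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(user_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

updates = [np.random.default_rng(i).normal(size=10) for i in range(5)]
new_global_delta = dp_aggregate(updates)                 # applied to the global model
```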
Recent advances in machine learning have enabled its wide application across domains, and one of the most exciting applications is autonomous vehicles (AVs), which has spurred the development of many ML algorithms spanning perception, prediction, and planning. However, training AVs usually requires a large amount of data collected from different driving environments (e.g., cities) as well as different types of personal information (e.g., working hours and routes). Such collected big data, regarded as the new oil for ML in the data-centric AI era, usually contains a large amount of privacy-sensitive information that is hard to remove or even audit. Although existing privacy-protection approaches have achieved certain theoretical and empirical success, a gap remains when applying them to real-world applications such as autonomous vehicles. For instance, when training AVs, not only can individually identifiable information reveal privacy-sensitive information, but so can population-level information, such as road construction within a city, and AVs' proprietary commercial secrets. It is therefore critical to revisit the frontier of privacy risks and the corresponding protection approaches in AVs to bridge this gap. Toward this goal, in this work we provide a new taxonomy of privacy risks and protection methods in AVs, categorizing privacy in AVs into three levels: individual, population, and proprietary. We explicitly list the challenges of protecting privacy at each level, summarize existing solutions to these challenges, discuss lessons and conclusions, and provide potential future directions and opportunities for both researchers and practitioners. We believe this work will help shape privacy research in AVs and guide the design of privacy-protection technologies.
As autonomous vehicle (AV) development progresses, concerns about the safety of passengers and of agents in the environment have grown. Every real-world traffic collision involving an autonomously controlled vehicle has compounded this concern. Open-source autonomous driving implementations exhibit software architectures with complex, interdependent tasks that rely heavily on machine learning and deep neural networks (DNNs), which are vulnerable to non-deterministic faults and corner cases. These complex subsystems work together to fulfill the mission of the AV while also maintaining safety. Although significant improvements are being made toward increasing the empirical reliability of and confidence in these systems, the inherent limitations of DNN verification pose an as-yet insurmountable challenge to providing deterministic safety guarantees for AVs. We propose Synergistic Redundancy (SR), a safety architecture for complex cyber-physical systems such as AVs. SR provides verifiable safety guarantees by decoupling the system's mission and safety tasks. While fulfilling their primary roles independently, the partially functionally redundant mission and safety tasks are able to aid each other, synergistically improving the combined system. The synergistic safety layer uses only verifiable and analyzable software to fulfill its tasks. Close coordination with the mission layer allows easier and earlier detection of emergent faults in the system. SR simplifies the mission layer's optimization goals and improves its design. SR enables the safe deployment of high-performance, though inherently unverifiable, machine learning software. In this work, we first present the design and features of the SR architecture and then evaluate the efficacy of the solution, focusing on the critical problem of obstacle-existence detection faults in AVs.
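A toy sketch of the mission/safety decoupling pattern: an unverifiable mission planner proposes a command, and a simple, analyzable safety check (stopping distance versus the nearest measured range) overrides it when violated. The function, numbers, and fallback action are illustrative, not the paper's design.

```python
# Toy safety-layer override: accept the mission command only if a conservative,
# analyzable stopping-distance check passes; otherwise fall back to braking.
def safety_layer(mission_cmd_speed, nearest_obstacle_m, decel_mps2=4.0, margin_m=2.0):
    stopping_distance = mission_cmd_speed ** 2 / (2 * decel_mps2)
    if stopping_distance + margin_m > nearest_obstacle_m:
        return 0.0, "safety override: brake"        # verifiable fallback action
    return mission_cmd_speed, "mission command accepted"

print(safety_layer(mission_cmd_speed=15.0, nearest_obstacle_m=20.0))  # override
print(safety_layer(mission_cmd_speed=8.0,  nearest_obstacle_m=40.0))  # accept
```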
Perception of obstacles remains a critical safety concern for autonomous vehicles. Real-world collisions have shown that the autonomy faults leading to fatal collisions originate from obstacle existence detection. Open-source autonomous driving implementations show perception pipelines with complex, interdependent deep neural networks. These networks cannot be fully verified, making them unsuitable for safety-critical tasks. In this work, we present a safety verification of an existing LiDAR-based classical obstacle detection algorithm. We establish strict bounds on the capabilities of this obstacle detection algorithm. Given safety standards, such bounds allow for determining the LiDAR sensor properties that would reliably satisfy the standards. Such an analysis has so far been unattainable for neural-network-based perception systems. We provide a rigorous analysis of the obstacle detection system and present empirical results based on real-world sensor data.
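As an illustration of how such bounds can translate into sensor requirements (not the paper's exact derivation), the snippet below computes the horizontal angular resolution a LiDAR would need so that at least k beams fall on an obstacle of width w at range d.

```python
# Illustrative bound: resolution <= (angle subtended by the obstacle) / k
import math

def required_resolution_deg(obstacle_width_m, range_m, min_returns):
    subtended = 2 * math.atan(obstacle_width_m / (2 * range_m))   # angle the obstacle spans
    return math.degrees(subtended / min_returns)

# e.g. a 0.5 m-wide obstacle at 60 m, requiring 3 returns:
print(f"{required_resolution_deg(0.5, 60.0, 3):.3f} deg")   # ~0.159 deg
```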